Andrea Passerini

If Concept Bottlenecks are the Question, are Foundation Models the Answer?

Apr 29, 2025

Boosting Relational Deep Learning with Pretrained Tabular Models

Apr 07, 2025

A Probabilistic Neuro-symbolic Layer for Algebraic Constraint Satisfaction

Mar 25, 2025

Simple Path Structural Encoding for Graph Transformers

Feb 13, 2025

Beyond Topological Self-Explainable GNNs: A Formal Explainability Perspective

Feb 04, 2025

A Self-Explainable Heterogeneous GNN for Relational Deep Learning

Nov 30, 2024

Benchmarking XAI Explanations with Human-Aligned Evaluations

Nov 04, 2024

Time Can Invalidate Algorithmic Recourse

Oct 10, 2024

xAI-Drop: Don't Use What You Cannot Explain

Jul 29, 2024

Perks and Pitfalls of Faithfulness in Regular, Self-Explainable and Domain Invariant GNNs

Jun 21, 2024